    Fusion of Range and Stereo Data for High-Resolution Scene-Modeling

    This work has received funding from the French Agence Nationale de la Recherche (ANR) under the MIXCAM project, grant number ANR-13-BS02-0010-01. Georgios Evangelidis is the corresponding author.

    Editorial


    Surface feature detection and description with applications to mesh matching

    In this paper we revisit local feature detectors/descriptors developed for 2D images and extend them to the more general framework of scalar fields defined on 2D manifolds. We provide methods and tools to detect and describe features on surfaces equipped with scalar functions, such as photometric information. This is motivated by the growing need for matching and tracking photometric surfaces over temporal sequences, due to recent advancements in multiple-camera 3D reconstruction. We propose a 3D feature detector (MeshDOG) and a 3D feature descriptor (MeshHOG) for uniformly triangulated meshes, invariant to changes in rotation, translation, and scale. The descriptor captures the local geometric and/or photometric properties in a succinct fashion. Moreover, the method is defined generically for any scalar function, e.g., local curvature. Results on matching rigid and non-rigid meshes demonstrate the effectiveness of the proposed framework.
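    To make the detection step concrete, here is a minimal sketch of a MeshDOG-style extremum detector, assuming a vertex-indexed scalar field and precomputed one-ring neighborhoods; it approximates the idea with simple one-ring averaging and is not the authors' implementation.

```python
import numpy as np

def one_ring_smooth(values, neighbors):
    """One smoothing pass: average each vertex value with its one-ring."""
    out = np.empty_like(values)
    for v, nbrs in enumerate(neighbors):
        out[v] = (values[v] + values[list(nbrs)].sum()) / (1 + len(nbrs))
    return out

def mesh_dog_keypoints(values, neighbors, octaves=3, thresh=0.01):
    """Difference-of-Gaussians-style extrema of a scalar field on a mesh.

    values    : (V,) scalar function at the vertices (e.g., photometric data).
    neighbors : list of vertex-index sets, one one-ring per vertex (assumed
                precomputed from the triangulation).
    """
    levels = [values]
    for _ in range(octaves):
        levels.append(one_ring_smooth(levels[-1], neighbors))
    keypoints = []
    for a, b in zip(levels, levels[1:]):
        dog = b - a                         # difference of smoothing levels
        for v, nbrs in enumerate(neighbors):
            if not nbrs:
                continue
            ring = dog[list(nbrs)]
            # keep vertices that are strict extrema over their one-ring
            if abs(dog[v]) > thresh and (dog[v] > ring.max() or dog[v] < ring.min()):
                keypoints.append(v)
    return sorted(set(keypoints))
```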

    Visually guided object grasping


    Estimating Depth from RGB and Sparse Sensing

    We present a deep model that can accurately produce dense depth maps given an RGB image with known depth at a very sparse set of pixels. The model works simultaneously for both indoor and outdoor scenes and produces state-of-the-art dense depth maps at nearly real-time speeds on both the NYUv2 and KITTI datasets. We surpass the state of the art for monocular depth estimation even with depth values for only 1 out of every ~10000 image pixels, and we outperform other sparse-to-dense depth methods at all sparsity levels. With depth values for 1/256 of the image pixels, we achieve a mean absolute error of less than 1% of actual depth on indoor scenes, comparable to the performance of consumer-grade depth sensor hardware. Our experiments demonstrate that it would indeed be possible to efficiently transform sparse depth measurements obtained using, e.g., low-power depth sensors or SLAM systems into high-quality dense depth maps.
    Comment: European Conference on Computer Vision (ECCV) 2018. Updated to camera-ready version with additional experiments.
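    The quoted sparsity levels and error metric can be sketched as follows; the sampling routine and the `dense_pred` variable are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sparsify_depth(depth, keep_fraction=1 / 256, seed=0):
    """Keep a random subset of valid depth pixels; zero out the rest.

    Mimics the setting where depth is known at a very sparse set of
    pixels (e.g., 1/256 of the image, or ~1 in 10000).
    """
    rng = np.random.default_rng(seed)
    mask = (depth > 0) & (rng.random(depth.shape) < keep_fraction)
    return np.where(mask, depth, 0.0), mask

def mean_abs_relative_error(pred, target):
    """Mean absolute error as a fraction of true depth, over valid pixels."""
    valid = target > 0
    return np.mean(np.abs(pred[valid] - target[valid]) / target[valid])

# Hypothetical usage: `dense_pred` would come from a sparse-to-dense model.
# sparse, mask = sparsify_depth(depth)               # model input with the RGB image
# err = mean_abs_relative_error(dense_pred, depth)   # < 0.01 reported indoors
```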

    Microphone array signal processing for robot audition

    Robot audition for humanoid robots interacting naturally with humans in an unconstrained real-world environment is a hitherto unsolved challenge. The recorded microphone signals are usually distorted by background and interfering noise sources (speakers) as well as room reverberation. In addition, the movements of a robot and its actuators cause ego-noise, which degrades the recorded signals significantly. The movement of the robot body and its head also complicates the detection and tracking of the desired, possibly moving, sound sources of interest. This paper presents an overview of the concepts in microphone array processing for robot audition and reviews some recent achievements.
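    As one concrete example of the array-processing concepts surveyed here, the sketch below implements a textbook frequency-domain delay-and-sum beamformer under far-field, free-field assumptions; it is illustrative only and not any specific method from the paper.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Frequency-domain delay-and-sum beamformer for a far-field source.

    signals       : (M, N) array, one row per microphone.
    mic_positions : (M, 3) microphone coordinates in meters.
    direction     : (3,) unit vector pointing toward the source.
    fs            : sampling rate in Hz; c is the speed of sound (m/s).
    """
    m, n = signals.shape
    delays = mic_positions @ direction / c      # per-mic propagation delay (s)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)      # FFT bin frequencies
    spectra = np.fft.rfft(signals, axis=1)
    # Shift each channel by its delay so the target direction adds coherently
    # while uncorrelated noise and other directions average down.
    aligned = spectra * np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n)
```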

    An overview of depth cameras and range scanners based on time-of-flight technologies

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00138-016-0784-4. This work has received funding from the French Agence Nationale de la Recherche (ANR) under the MIXCAM project ANR-13-BS02-0010-01, and from the European Research Council (ERC) under the Advanced Grant VHIA, project 340113.

    Context-Awareness at the Service of Sensor Fusion Systems: Inverting the Usual Scheme

    Proceedings of the 11th International Work-Conference on Artificial Neural Networks (IWANN 2011), International Workshop of Intelligent Systems for Context-Based Information Fusion (ISCIF 11), Torremolinos-Málaga, Spain, June 8-10, 2011.
    Many works on context-aware systems make use of the location, navigation, or tracking services offered by an underlying sensor fusion module as part of the relevant contextual information. The obtained knowledge is typically consumed only by the high-level layers of the system, even though context itself represents a valuable source of information from which every part of the implemented system could benefit. This paper closes the loop, analyzing how context knowledge can be applied to improve the accuracy, robustness, and adaptability of sensor fusion processes. The theoretical analysis is related throughout to the indoor/outdoor navigation system implemented for a wheeled robotic platform. Some preliminary results are presented, in which the context information provided by a map is integrated into the sensor fusion system.
    This work was supported in part by projects ATLANTIDA, CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255, and DPS2008-07029-C02-02.
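    A minimal sketch of this closing idea, assuming an occupancy-grid map and a particle-filter-style fusion back end (both illustrative assumptions, not the authors' system): map context re-weights state hypotheses directly inside the fusion loop.

```python
import numpy as np

def apply_map_context(particles, weights, occupancy, resolution):
    """Down-weight pose hypotheses that the map rules out.

    particles  : (P, 2) candidate (x, y) positions in meters.
    weights    : (P,) current particle weights from the fusion filter.
    occupancy  : 2D boolean grid, True where the map marks occupied space.
    resolution : grid cell size in meters.
    """
    cells = np.floor(particles / resolution).astype(int)
    rows = np.clip(cells[:, 1], 0, occupancy.shape[0] - 1)
    cols = np.clip(cells[:, 0], 0, occupancy.shape[1] - 1)
    blocked = occupancy[rows, cols]
    new_w = np.where(blocked, 0.0, weights)   # map context vetoes impossible poses
    s = new_w.sum()
    return new_w / s if s > 0 else weights    # fall back if the map rejects all
```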